Refactor turbomind attention by precomputing cos/sin #2801

Open
irexyc wants to merge 7 commits into base: main
Conversation

@irexyc (Collaborator) commented on Nov 25, 2024

Motivation

Calculate the rotary-embedding cos/sin values in advance, reducing the number of parameters passed to the prefill/decode attention kernels.
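
The idea, in a minimal host-side sketch (function name, table layout, and default base are illustrative assumptions, not the PR's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Precompute the rotary-embedding cos/sin table once on the host, so the
// prefill/decode kernels can index it by position instead of receiving the
// rope parameters and recomputing the angles per token.
// Layout used here: cos_sin[pos * head_dim + 2 * i]     = cos(pos * inv_freq_i)
//                   cos_sin[pos * head_dim + 2 * i + 1] = sin(pos * inv_freq_i)
std::vector<float> PrecomputeCosSin(int max_position, int head_dim, float base = 10000.f)
{
    std::vector<float> cos_sin(static_cast<size_t>(max_position) * head_dim);
    for (int pos = 0; pos < max_position; ++pos) {
        for (int i = 0; i < head_dim / 2; ++i) {
            const float inv_freq = std::pow(base, -2.f * i / head_dim);
            const float angle    = pos * inv_freq;
            cos_sin[static_cast<size_t>(pos) * head_dim + 2 * i]     = std::cos(angle);
            cos_sin[static_cast<size_t>(pos) * head_dim + 2 * i + 1] = std::sin(angle);
        }
    }
    return cos_sin;  // uploaded to the GPU once, reused by every attention call
}
```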

@lvhan028 changed the title from "Use precomputed cos/sin" to "Refactor turbomind attention by precomputing cos/sin" on Nov 27, 2024
@@ -81,6 +83,11 @@ void UnifiedDecoder<T>::forwardSelfAttn(T* attn_io,
inputs.insert("h_cu_q_len", {MEMORY_CPU, TYPE_INT32, {batch_size + 1}, h_cu_q_len_});
inputs.insert("h_cu_k_len", {MEMORY_CPU, TYPE_INT32, {batch_size + 1}, h_cu_k_len_});

if (rotary_emb_) {
Collaborator: Is there any case where rotary_emb_ is a nullptr?
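
For context, a hedged sketch of what the guarded branch might add, mirroring the inputs.insert pattern shown above; the "cos_sin" tensor name, its type/shape, and the cos_sin_ / rotary_emb_dim_ members are assumptions, not the PR's exact code:

```cpp
if (rotary_emb_) {
    // Hypothetical: hand the precomputed cos/sin table to the attention layer
    // instead of the rope parameters it previously recomputed per step.
    inputs.insert("cos_sin",
                  {MEMORY_GPU, TYPE_FP32, {token_num, (size_t)rotary_emb_dim_}, cos_sin_});
}
```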

@@ -59,22 +59,45 @@ struct MoeParam {
std::vector<int> expert_num;
};

enum class RotaryScalingType
Collaborator: RotaryScalingType -> RopeType


struct InnerYarnRopeParam {
float attention_factor;
float yarn_ramp_inv_factor_div_2;
Collaborator: I think we can remove the prefix "yarn_"

};

struct InnerLlama3RopeParam {
float llama3_inv_scaling_factor;
Collaborator: The prefix "llama3_" can be removed
